Numerical Study of the Two-Species Vlasov-Ampère System: Energy-Conserving Schemes and the Current-Driven Ion-Acoustic Instability
In this paper, we propose energy-conserving Eulerian solvers for the
two-species Vlasov-Ampère (VA) system and apply the methods to simulate
current-driven ion-acoustic instability. The algorithm is generalized from our
previous work for the single-species VA system and Vlasov-Maxwell (VM) system.
The main feature of the schemes is their ability to preserve the total particle
number and the total energy at the fully discrete level, regardless of mesh size.
These are desirable properties for numerical schemes, especially for long-time
simulations on under-resolved meshes. The conservation is realized by explicit
and implicit energy-conserving temporal discretizations, and the discontinuous
Galerkin (DG) spatial discretizations. We benchmarked our algorithms on a test
example that checks the one-species limit, and on the current-driven
ion-acoustic instability. Simulating the latter requires a slight modification
of the implicit method to fully decouple the split equations; this is achieved
by a Gauss-Seidel-type iteration technique. Numerical results verified the
conservation properties and the performance of our methods.
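The Gauss-Seidel-type decoupling mentioned in the abstract can be illustrated, in a hedged way, on a generic coupled linear system: each sub-problem is solved with the latest value of the other unknown. The blocks A, B, C, D below are toy stand-ins, not the actual discretized Vlasov-Ampère equations.

```python
import numpy as np

# Toy coupled block system:  A u + B v = f,   C u + D v = g.
A = np.array([[4.0, 1.0], [0.0, 3.0]])   # block acting on unknown u
B = np.array([[1.0, 0.0], [0.0, 1.0]])   # coupling of u-equation to v
C = np.array([[0.5, 0.0], [0.0, 0.5]])   # coupling of v-equation to u
D = np.array([[5.0, 0.0], [1.0, 4.0]])   # block acting on unknown v
f = np.array([1.0, 2.0])
g = np.array([0.0, 1.0])

u = np.zeros(2)
v = np.zeros(2)
for _ in range(50):
    # Gauss-Seidel sweep: solve each sub-problem with the freshest
    # value of the other unknown, instead of the full coupled solve.
    u = np.linalg.solve(A, f - B @ v)
    v = np.linalg.solve(D, g - C @ u)

# Reference: the monolithic coupled solve.
K = np.block([[A, B], [C, D]])
ref = np.linalg.solve(K, np.concatenate([f, g]))
err = np.linalg.norm(np.concatenate([u, v]) - ref)
```

For this toy system the block iteration converges rapidly because the coupling blocks are weak relative to the diagonal blocks; the split solves agree with the monolithic solve to machine precision.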
Stacking-based Deep Neural Network: Deep Analytic Network on Convolutional Spectral Histogram Features
A stacking-based deep neural network (S-DNN), in general, denotes a network
that resembles a deep neural network (DNN) in its very deep, feedforward
architecture. The typical S-DNN aggregates a variable number of individually
learnable modules in series to assemble a DNN-like alternative for the targeted
object recognition tasks. This work likewise devises an S-DNN instantiation,
dubbed deep analytic network (DAN), on top of the spectral histogram (SH)
features. The DAN learning principle relies on ridge regression, and some key
DNN constituents, specifically, rectified linear unit, fine-tuning, and
normalization. The DAN aptitude is scrutinized on three repositories of varying
domains, including FERET (faces), MNIST (handwritten digits), and CIFAR10
(natural objects). The empirical results show that DAN improves on the SH
baseline performance once the network is sufficiently deep.
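The ridge-regression stacking principle behind DAN can be sketched, under assumptions, on toy Gaussian data: each layer is fit in closed form and passed through a rectified linear unit with normalization. The data, layer count, and regularization weight here are illustrative; the actual DAN operates on spectral histogram features with fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data with one-hot targets (illustrative stand-in).
X = np.vstack([rng.normal(0, 1, (50, 10)) + 1,
               rng.normal(0, 1, (50, 10)) - 1])
Y = np.vstack([np.tile([1.0, 0.0], (50, 1)),
               np.tile([0.0, 1.0], (50, 1))])

def ridge_fit(H, Y, lam=1e-2):
    # Closed-form ridge regression: W = (H^T H + lam I)^{-1} H^T Y.
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ Y)

H = X
for _ in range(3):                       # stack three analytic layers
    W = ridge_fit(H, Y)                  # layer trained without BP
    H = np.maximum(H @ W, 0.0)           # rectified linear unit
    H = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-12)  # normalize

W_out = ridge_fit(H, Y)                  # analytic output layer
acc = ((H @ W_out).argmax(axis=1) == Y.argmax(axis=1)).mean()
```

Each layer has a closed-form solution, so no backpropagation pass is needed; this is the sense in which the modules are individually learnable.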
Energy-conserving discontinuous Galerkin methods for the Vlasov-Ampère system
In this paper, we propose energy-conserving numerical schemes for the
Vlasov-Ampère (VA) system. The VA system is a model used to describe the
evolution of the probability density function of charged particles under a
self-consistent electric field in plasmas. It conserves many physical quantities,
including the total energy, which comprises the kinetic and electric
energies. Unlike total particle number conservation, total energy
conservation is challenging to achieve. For long-time simulations,
neglecting this fact can cause unphysical results, such as plasma
self-heating or cooling. In this paper, we develop the first Eulerian solvers that
can preserve fully discrete total energy conservation. The main components of
our solvers include explicit or implicit energy-conserving temporal
discretizations, an energy-conserving operator splitting for the VA equation
and discontinuous Galerkin finite element methods for the spatial
discretizations. We validate our schemes by rigorous derivations and benchmark
numerical examples such as Landau damping, two-stream instability and
bump-on-tail instability.
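The fully discrete energy-conservation property can be illustrated, in a hedged way, on a toy linear oscillator advanced with the implicit midpoint rule, a standard energy-conserving temporal discretization. This is a generic sketch, not the paper's Vlasov-Ampère scheme.

```python
import numpy as np

# Toy system x' = v, v' = -x; total energy E = (v^2 + x^2) / 2.
dt = 0.1
x, v = 1.0, 0.0
E0 = 0.5 * (v**2 + x**2)

for _ in range(1000):
    # Implicit midpoint step: solve the 2x2 linear system
    #   x_{n+1} - (dt/2) v_{n+1} = x_n + (dt/2) v_n
    #   (dt/2) x_{n+1} + v_{n+1} = v_n - (dt/2) x_n
    A = np.array([[1.0, -dt / 2], [dt / 2, 1.0]])
    b = np.array([x + dt / 2 * v, v - dt / 2 * x])
    x, v = np.linalg.solve(A, b)

# The discrete energy is preserved to roundoff for every step size,
# not merely to the order of accuracy of the scheme.
drift = abs(0.5 * (v**2 + x**2) - E0)
```

The update matrix is a Cayley transform of a skew-symmetric matrix, hence orthogonal, so the discrete energy drift stays at roundoff level regardless of dt; this is the same "fully discrete" flavor of conservation claimed for the solvers above.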
Maximally-fast coarsening algorithms
We present maximally-fast numerical algorithms for conserved coarsening
systems that are stable and accurate with a growing natural time-step . For non-conserved systems, only effectively finite timesteps
are accessible for similar unconditionally stable algorithms. We compare the
scaling structure obtained from our maximally-fast conserved systems directly
against the standard fixed-timestep Euler algorithm, and find that the error
scales as -- so arbitrary accuracy can be achieved.
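The role of unconditional stability in permitting a growing time-step can be sketched, under assumptions, on the scalar test problem u' = -u rather than the coarsening equations of the paper: explicit Euler becomes unstable once the step exceeds its stability limit, while implicit Euler remains stable for any step size.

```python
# Compare explicit vs implicit Euler on u' = -u with a geometrically
# growing time-step (illustrative; not the paper's algorithm).
u_exp, u_imp = 1.0, 1.0
dt = 0.5
for _ in range(30):
    u_exp = u_exp * (1.0 - dt)   # explicit Euler: unstable for dt > 2
    u_imp = u_imp / (1.0 + dt)   # implicit Euler: stable for any dt
    dt *= 1.2                    # grow the step each iteration

explicit_blows_up = abs(u_exp) > 1.0
implicit_decays = abs(u_imp) < 1e-6
```

Only the unconditionally stable update can exploit a time-step that grows without bound, which is the enabling property behind the "maximally fast" conserved algorithms described above.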
Longitudinal evidence for a midlife nadir in human well-being: results from four data sets
There is a large amount of cross-sectional evidence for a midlife low in the life cycle of human happiness and well-being (a ‘U shape’). Yet no genuinely longitudinal inquiry has uncovered evidence for a U-shaped pattern, so some researchers believe the U is a statistical artefact. We re-examine this fundamental cross-disciplinary question and suggest a new test. Drawing on four data sets, and using only within-person changes in well-being, we document powerful support for a U shape in unadjusted longitudinal data, without the need for regression equations. The paper’s methodological contribution is to exploit the first-derivative properties of a well-being equation.
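The first-derivative logic can be sketched, under assumptions, on simulated panel data: if latent well-being is U-shaped in age, within-person year-to-year changes are negative before the nadir and positive after it, and person fixed effects difference out. The nadir at age 50 and all parameters below are purely illustrative, not estimates from the paper's data sets.

```python
import numpy as np

rng = np.random.default_rng(1)

n_people = 2000
ages = rng.integers(30, 70, n_people)
alpha = rng.normal(0, 2, n_people)      # person fixed effects

def well_being(age, alpha):
    # Latent U-shape in age (nadir at 50, illustrative) plus noise.
    return alpha + 0.01 * (age - 50) ** 2 + rng.normal(0, 0.5, len(age))

w1 = well_being(ages, alpha)
w2 = well_being(ages + 1, alpha)        # same people one year later
delta = w2 - w1                          # within-person change: alpha cancels

# First-derivative property of a U: mean change is negative before
# the nadir and positive after it.
mean_before = delta[ages < 45].mean()
mean_after = delta[ages >= 55].mean()
```

No regression equation is needed: the sign pattern of raw within-person changes by age is itself the test.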
Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification
A stacking-based deep neural network (S-DNN) aggregates a plurality of basic
learning modules, one after another, to synthesize a deep neural network
(DNN) alternative for pattern classification. Contrary to the DNNs trained end
to end by backpropagation (BP), each S-DNN layer, i.e., a self-learnable
module, is trained independently, without BP intervention.
In this paper, a ridge regression-based S-DNN, dubbed the deep analytic network
(DAN), and its kernelized variant (K-DAN) are devised for multilayer feature
re-learning from the pre-extracted baseline features and the structured
features. Our theoretical formulation demonstrates that DAN/K-DAN re-learn by
perturbing the intra/inter-class variations, apart from diminishing the
prediction errors. We scrutinize the DAN/K-DAN performance for pattern
classification on datasets of varying domains - faces, handwritten digits,
generic objects, to name a few. Unlike the typical BP-optimized DNNs, which are
trained on gigantic datasets with GPUs, we show that DAN/K-DAN are trainable
using only a CPU, even for small-scale training sets. Our experimental results
show that DAN/K-DAN outperform the present S-DNNs and also the BP-trained
DNNs, including the multilayer perceptron, deep belief network, etc., without
data augmentation.
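The kernelized analytic layer (K-DAN) idea can be sketched, under assumptions, as closed-form kernel ridge regression on toy Gaussian data, trained entirely on CPU. The RBF kernel, bandwidth, regularization, and data below are illustrative stand-ins, not the paper's features or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data with one-hot targets (illustrative stand-in).
X = np.vstack([rng.normal(-1, 1, (40, 5)), rng.normal(1, 1, (40, 5))])
Y = np.vstack([np.tile([1.0, 0.0], (40, 1)),
               np.tile([0.0, 1.0], (40, 1))])

def rbf(A, B, gamma=0.1):
    # RBF (Gaussian) kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Closed-form kernel ridge fit: (K + lam I) alpha = Y.
K = rbf(X, X)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)

pred = (rbf(X, X) @ alpha).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Because the fit is a single linear solve rather than an iterative gradient descent, small-scale training sets need no GPU, which mirrors the trainability claim above.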
Distributed Lagrange Multiplier/Fictitious Domain Finite Element Method for a Transient Stokes Interface Problem with Jump Coefficients
The distributed Lagrange multiplier/fictitious domain (DLM/FD)-mixed finite element method is developed and analyzed in this paper for a transient Stokes interface problem with jump coefficients. The semi- and fully discrete DLM/FD-mixed finite element schemes are developed for the first time for this problem with a moving interface, where the arbitrary Lagrangian-Eulerian (ALE) technique is employed to deal with the moving and immersed subdomain. Stability and optimal convergence properties are obtained for both schemes. Numerical experiments are carried out for different scenarios of jump coefficients, and all theoretical results are validated.
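The algebraic structure behind a distributed Lagrange multiplier can be sketched, under assumptions, as a saddle-point system: a constraint B u = g is enforced on a symmetric positive-definite system A u = f by introducing a multiplier p. The matrices below are toy stand-ins, not a finite element discretization of the Stokes interface problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)        # SPD "stiffness" block (toy)
B = rng.normal(size=(m, n))        # constraint (multiplier) block (toy)
f = rng.normal(size=n)
g = rng.normal(size=m)

# Saddle-point system: [[A, B^T], [B, 0]] [u; p] = [f; g].
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([f, g]))
u, p = sol[:n], sol[n:]

constraint_res = np.linalg.norm(B @ u - g)   # constraint satisfied exactly
state_res = np.linalg.norm(A @ u + B.T @ p - f)
```

The multiplier p represents the force needed to hold the constrained (immersed) degrees of freedom in place, which is the role the distributed Lagrange multiplier plays on the fictitious subdomain.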